Introduction
When an IT department considers moving from on-premises hardware to the cloud, it often faces a dilemma: adopt cloud-native services or opt for a lift-and-shift migration. While serverless architectures can lower costs and simplify operations, they may disrupt the workflows of end users. Conversely, a lift-and-shift migration preserves existing processes but does not alleviate the burden of managing server infrastructure and may not optimize costs. Seeking a solution that met their needs without compromising the user experience, the team at the University of St. Thomas, Minnesota, devised a hybrid strategy that combines cost savings, operational agility, and minimal disruption for web authors.
The Situation
The University of St. Thomas aimed to minimize the management of on-premises hardware for their university website while enhancing its availability by moving to the cloud. The existing setup relied on an IIS server managed by the IT department, with content being created by staff across various departments using two distinct Content Management Systems (CMS). The current publishing workflow was effective, and there was little interest in changing the established development and content management processes.
The IT team envisioned a serverless solution utilizing only Amazon Simple Storage Service (S3) for hosting static website content. This approach would not only cut costs but also eliminate web server management. However, only one of the CMS platforms supported direct publishing to S3.
A lift-and-shift strategy would involve migrating the website to an IIS server on Amazon Elastic Compute Cloud (EC2) and adapting the publishing workflow accordingly. This would avoid disruption for content authors but fail to achieve the desired cost-effectiveness and management simplicity.
Rather than abandoning their goal for a cloud-native architecture, the team navigated the constraints of the existing systems to find a solution.
Solution
Achieving cost efficiency, ease of management, and high availability required the use of S3 to store the website’s content (#1 in the diagram). If both CMS tools could publish directly to S3, the solution would be straightforward. However, since only one could do so, the team launched a t3.small EC2 instance (#2) to act as an intermediary between the CMS tools and the S3 bucket designated for website content. The initial plan used two simple file-synchronization processes to keep the EC2 instance updated with the CMS output. However, because both syncs mirrored into the same web root, each run inadvertently deleted the files produced by the other CMS.
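The collision comes from mirror-sync semantics: a mirror deletes anything in the destination that is missing from its own source. The post does not name the sync tooling, so the following is a minimal Python model of that behavior (flat directories only, hypothetical helper name) showing why two mirrors into one root cannot coexist:

```python
import pathlib
import shutil

def mirror_sync(src: pathlib.Path, dst: pathlib.Path) -> None:
    """Mirror src into dst: copy every file, then delete anything in dst
    that has no counterpart in src -- the same semantics as
    `rsync --delete` or `aws s3 sync --delete`."""
    dst.mkdir(parents=True, exist_ok=True)
    for f in src.iterdir():
        shutil.copy(f, dst / f.name)
    for f in dst.iterdir():
        if not (src / f.name).exists():
            f.unlink()  # "extraneous" from this source's point of view
```

Running `mirror_sync(cms_a, site)` and then `mirror_sync(cms_b, site)` leaves only CMS B's files in the shared root: the second mirror sees CMS A's pages as extraneous and removes them, which is exactly the failure the team hit.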
To resolve this, the team established distinct website roots in the EC2 file system for each CMS to synchronize their outputs. By leveraging Unionfs, a Linux utility that merges multiple directories into a single logical path, they created a unified root folder for the website (#3), which could then be easily pushed to S3 using the S3 CLI.
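With separate roots in place, the merge-and-publish step might look like the following sketch. The directory names, bucket name, and exact unionfs-fuse options are illustrative assumptions; the post does not give the commands the team used:

```shell
# Each CMS syncs into its own root, so neither mirror can delete the
# other's files:
#   /srv/roots/cms-a   <- output of the CMS without native S3 publishing
#   /srv/roots/cms-b   <- output of the other CMS

# Merge the roots into one logical website tree (read-only branches).
unionfs-fuse /srv/roots/cms-a=RO:/srv/roots/cms-b=RO /srv/site

# Mirror the merged tree to the website bucket.
aws s3 sync /srv/site s3://example-university-website --delete
```

The `--delete` flag is safe here because it runs against the union, which always contains every file from every root.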
With this framework in place, the team successfully devised an architecture for their website that was almost as economical as a static site hosted on S3, all while preserving the tools and processes familiar to their authors.
One final technical hurdle remained: the IIS site contained internal metadata that redirected users from virtual directories to actual content elsewhere on the site. For instance, https://…./law might redirect to https://…./lawschool/. To replicate this functionality, the IT team generated an HTML file for each redirect and added them to a third website root directory on the EC2 instance (#4). Each file contained the static HTML needed to redirect the user’s browser to the correct endpoint. By merging this directory with the others through Unionfs, the team created a single logical representation of the website’s content, which could be synchronized to S3 with a simple S3 sync CLI command.
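A redirect page of this kind is a few lines of static HTML using a meta-refresh tag. A hypothetical generator is sketched below; the file layout, function name, and template are assumptions, not the team’s actual script:

```python
import pathlib

# Meta-refresh page: the browser loads {target} immediately (delay 0).
REDIRECT_TEMPLATE = """<!DOCTYPE html>
<html>
  <head>
    <meta http-equiv="refresh" content="0; url={target}">
    <title>Redirecting</title>
  </head>
  <body>
    <p>This page has moved to <a href="{target}">{target}</a>.</p>
  </body>
</html>
"""

def write_redirects(root: pathlib.Path, redirects: dict) -> None:
    """Create one index.html per virtual directory, each redirecting
    the browser to the real location on the site."""
    for source, target in redirects.items():
        page = root / source.strip("/") / "index.html"
        page.parent.mkdir(parents=True, exist_ok=True)
        page.write_text(REDIRECT_TEMPLATE.format(target=target))
```

For example, `write_redirects(pathlib.Path("/srv/roots/redirects"), {"/law": "/lawschool/"})` would produce `law/index.html` under the redirects root, ready to be merged into the site tree via Unionfs.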
As a final enhancement, they placed an Amazon CloudFront distribution (#5) in front of the site to cache its contents, improving response times for visitors. The distribution’s object-caching TTLs were left at their defaults. Because the publishing process ran every 15 minutes, the team wrote an AWS Lambda function (#6), triggered by S3 event notifications, to invalidate the cache whenever an object was added to or removed from the S3 bucket, ensuring visitors always received the latest content.
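A sketch of such a Lambda function follows. The distribution ID is a placeholder, and invalidating the exact object path (rather than a wildcard) is an assumption; the post does not show the team’s code:

```python
import time
import urllib.parse

def invalidation_paths(event: dict) -> list:
    """Turn an S3 event notification into CloudFront invalidation paths.
    Object keys arrive URL-encoded in the event, so decode them first."""
    return [
        "/" + urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        for record in event.get("Records", [])
    ]

def lambda_handler(event, context):
    # boto3 ships with the Lambda runtime; it is imported here so the
    # helper above can be exercised without the AWS SDK installed.
    import boto3

    paths = invalidation_paths(event)
    cloudfront = boto3.client("cloudfront")
    cloudfront.create_invalidation(
        DistributionId="REPLACE_WITH_DISTRIBUTION_ID",  # placeholder
        InvalidationBatch={
            "Paths": {"Quantity": len(paths), "Items": paths},
            "CallerReference": str(time.time()),  # must be unique per call
        },
    )
```

Configuring the bucket to send both `ObjectCreated` and `ObjectRemoved` notifications to this function covers the add and remove cases described above.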
Conclusion
The IT team at the University of St. Thomas found an inventive solution for their university website that reduced server management efforts, achieved operational simplicity and cost savings through cloud-native services, and maintained the authoring tools and processes their users valued. The combination of server-based and serverless components in their design illustrates the adaptability of cloud architectures and showcases the team’s creativity.